Low-Rank Approximations with Sparse Factors I: Basic Algorithms and Error Analysis

Authors

  • Zhenyue Zhang
  • Hongyuan Zha
  • Horst D. Simon
Abstract

We consider the problem of computing low-rank approximations of matrices. The novel aspects of our approach are that we require the low-rank approximations to be written in a factorized form with sparse factors, and that the degree of sparsity of the factors can be traded off for reduced reconstruction error by certain user-determined parameters. We give a detailed error analysis of our proposed algorithms and compare the computed sparse low-rank approximations with those obtained from the singular value decomposition. We present numerical examples arising from some application areas to illustrate the efficiency and accuracy of our algorithms.

1. Introduction.

We consider the problem of computing low-rank approximations of a given matrix $A \in \mathbb{R}^{m \times n}$, a problem that arises in many application areas; see [5, 14, 17] for a few examples. The theory of the singular value decomposition (SVD) provides the following characterization of the best low-rank approximations of $A$ in terms of the Frobenius norm $\|\cdot\|_F$ [5, Theorem 2.5.3].

Theorem 1.1. Let the singular value decomposition of $A \in \mathbb{R}^{m \times n}$ be $A = U \Sigma V^T$ with singular values $\sigma_1 \ge \sigma_2 \ge \cdots$. Then
$$\sum_{i=k+1}^{\min(m,n)} \sigma_i^2 \;=\; \min\{\, \|A - B\|_F^2 \;:\; \operatorname{rank}(B) \le k \,\}.$$
The minimum is achieved with $\operatorname{best}_k(A) \equiv U_k \operatorname{diag}(\sigma_1, \ldots, \sigma_k) V_k^T$, where $U_k$ and $V_k$ are the matrices formed by the first $k$ columns of $U$ and $V$, respectively.

For any low-rank approximation $B$ of $A$, we call $\|A - B\|_F$ the reconstruction error of using $B$ as an approximation of $A$. By Theorem 1.1, $\operatorname{best}_k(A)$ has the smallest reconstruction error in Frobenius norm among all rank-$k$ approximations of $A$. In certain applications, it is desirable to impose further constraints on the low-rank approximation $B$ in addition to requiring that it be of low rank. Consider the case where, for example, the matrix $A$ is sparse; it is generally not true that ...
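As a quick illustration of Theorem 1.1, the NumPy sketch below (a made-up random matrix and made-up parameters, not data from the paper) forms best_k(A) from the truncated SVD, checks that its reconstruction error matches the trailing singular values, and then naively zeroes small entries of the factors to show that sparser factors generally come with a larger reconstruction error. The thresholding step is only an illustration of that trade-off; it is not the sparse-factorization algorithm proposed in the paper.

```python
import numpy as np

def best_k(A, k):
    """Truncated SVD factors of the best rank-k approximation in Frobenius norm (Theorem 1.1)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]

rng = np.random.default_rng(0)            # example data, not from the paper
A = rng.standard_normal((80, 50))
k = 10

Uk, sk, Vkt = best_k(A, k)
B = Uk @ np.diag(sk) @ Vkt

# Theorem 1.1: ||A - best_k(A)||_F^2 equals the sum of the trailing squared singular values,
# so the two printed numbers should agree up to rounding.
s_all = np.linalg.svd(A, compute_uv=False)
print(np.linalg.norm(A - B, "fro"), np.sqrt(np.sum(s_all[k:] ** 2)))

# Naive illustration of the sparsity/error trade-off (NOT the paper's method):
# zeroing small entries makes the factors sparser but increases the error.
tau = 0.05
Uk_sparse = np.where(np.abs(Uk) > tau, Uk, 0.0)
Vkt_sparse = np.where(np.abs(Vkt) > tau, Vkt, 0.0)
B_sparse = Uk_sparse @ np.diag(sk) @ Vkt_sparse
print(np.linalg.norm(A - B_sparse, "fro"))
```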

Similar Articles

Subspace Iteration Randomization and Singular Value Problems

A classical problem in matrix computations is the efficient and reliable approximation of a given matrix by a matrix of lower rank. The truncated singular value decomposition (SVD) is known to provide the best such approximation for any given fixed rank. However, the SVD is also known to be very costly to compute. Among the different approaches in the literature for computing low-rank approxima...
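For context, a common randomized subspace-iteration scheme for approximate SVDs looks roughly like the sketch below; the function name, oversampling amount, and iteration count are illustrative assumptions, not details taken from the cited paper.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, n_iter=2, seed=0):
    """Rough sketch of randomized subspace iteration for a rank-k SVD."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # sample the range of A with a Gaussian test matrix
    Q, _ = np.linalg.qr(A @ rng.standard_normal((n, k + oversample)))
    for _ in range(n_iter):              # subspace (power) iterations sharpen the basis Q
        Q, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ Q)
    # solve the small projected problem and lift the factors back
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]
```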

SVD based initialization: A head start for nonnegative matrix factorization

We describe Nonnegative Double Singular Value Decomposition (NNDSVD), a new method designed to enhance the initialization stage of nonnegative matrix factorization (NMF). NNDSVD can readily be combined with existing NMF algorithms. The basic algorithm contains no randomization and is based on two SVD processes, one approximating the data matrix, the other approximating positive sections of the ...
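The sketch below follows the general NNDSVD recipe summarized above, one SVD of the data matrix followed by the positive and negative sections of each singular-vector pair, but the exact normalization and the handling of degenerate columns are assumptions rather than the published method.

```python
import numpy as np

def nndsvd_init(A, r):
    """Sketch of an SVD-based nonnegative initialization for NMF (NNDSVD-style)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    W = np.zeros((A.shape[0], r))
    H = np.zeros((r, A.shape[1]))
    # leading singular pair of a nonnegative matrix can be made nonnegative by a sign flip
    W[:, 0] = np.sqrt(s[0]) * np.abs(U[:, 0])
    H[0, :] = np.sqrt(s[0]) * np.abs(Vt[0, :])
    for j in range(1, r):
        u, v = U[:, j], Vt[j, :]
        up, un = np.maximum(u, 0), np.maximum(-u, 0)   # positive/negative sections
        vp, vn = np.maximum(v, 0), np.maximum(-v, 0)
        # keep whichever signed section carries more energy
        if np.linalg.norm(up) * np.linalg.norm(vp) >= np.linalg.norm(un) * np.linalg.norm(vn):
            u_sec, v_sec = up, vp
        else:
            u_sec, v_sec = un, vn
        nu, nv = np.linalg.norm(u_sec), np.linalg.norm(v_sec)
        if nu * nv > 0:
            W[:, j] = np.sqrt(s[j] * nu * nv) * u_sec / nu
            H[j, :] = np.sqrt(s[j] * nu * nv) * v_sec / nv
    return W, H
```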

Fast CUR Approximation of Average Matrix and Extensions

CUR and low-rank approximations are among the most fundamental subjects of numerical linear algebra and have applications to a variety of other highly important areas of modern computing, ranging from machine learning theory and neural networks to data mining and analysis. We first dramatically accelerate computation of such approximations for the average input matrix, then show some na...
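As a point of reference, a bare-bones CUR approximation can be sketched as follows; uniform column/row sampling and a pseudoinverse core are simplifying assumptions here, whereas practical CUR methods, including the cited one, use more careful selection schemes.

```python
import numpy as np

def cur_approximation(A, c, r, seed=0):
    """Minimal CUR sketch: A is approximated by C @ U @ R."""
    rng = np.random.default_rng(seed)
    cols = rng.choice(A.shape[1], size=c, replace=False)   # sampled column indices
    rows = rng.choice(A.shape[0], size=r, replace=False)   # sampled row indices
    C = A[:, cols]                                          # actual columns of A
    R = A[rows, :]                                          # actual rows of A
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)           # small core linking C and R
    return C, U, R
```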

Adaptive Variable-Rank Approximation of General Dense Matrices

In order to handle large dense matrices arising in the context of integral equations efficiently, panel-clustering approaches (like the popular multipole expansion method) have proven to be very useful. These techniques split the matrix into blocks, approximate the kernel function on each block by a degenerate expansion, and discretize this expansion in order to find an efficient low-rank appro...
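The adaptive, variable-rank idea can be illustrated blockwise: truncate each block's SVD at the smallest rank that meets an error tolerance, so that smooth blocks receive very low ranks. The sketch below is a generic stand-in under that assumption, not the paper's panel-clustering/expansion construction.

```python
import numpy as np

def compress_block(block, rel_tol):
    """Keep the smallest rank whose trailing singular values fall below
    rel_tol * ||block||_F; returns the two low-rank factors of the block."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    # tail[k] = Frobenius error of the rank-k truncation of this block
    tail = np.sqrt(np.append(np.cumsum(s[::-1] ** 2)[::-1], 0.0))
    k = int(np.argmax(tail <= rel_tol * np.linalg.norm(block, "fro")))
    return U[:, :k] * s[:k], Vt[:k, :]          # block ~ (U_k diag(s_k)) @ V_k^T

# toy usage: a numerically rank-3 block is compressed to rank 3 automatically
rng = np.random.default_rng(1)
block = rng.standard_normal((64, 3)) @ rng.standard_normal((3, 64))
Wb, Hb = compress_block(block, 1e-10)
print(Wb.shape[1])
```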


Journal:
  • SIAM J. Matrix Analysis Applications

Volume 23, Issue -

Pages -

Published 2002